86 research outputs found

    A note on approximate accelerated forward-backward methods with absolute and relative errors, and possibly strongly convex objectives

    In this short note, we provide a simple version of an accelerated forward-backward method (a.k.a. Nesterov's accelerated proximal gradient method), possibly relying on approximate proximal operators and allowing one to exploit strong convexity of the objective function. The method supports both relative and absolute errors, and its behavior is illustrated on a set of standard numerical experiments. Using the same developments, we further provide a version of the accelerated hybrid proximal extragradient method of Monteiro and Svaiter (2013), possibly exploiting strong convexity of the objective function.
    Comment: Minor modifications in notations and acknowledgments. These methods were originally presented in arXiv:2006.06041v2. Code available at https://github.com/mathbarre/StronglyConvexForwardBackwar
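The note does not spell out its iteration here, so as a rough, hedged illustration only, the following sketches a textbook accelerated forward-backward (Nesterov/FISTA-style) method with an exact proximal step, applied to an assumed lasso instance; the function names and the problem choice are illustrative, not the note's actual algorithm.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1, computed exactly here
    (the note instead allows approximate proximal operators)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def accelerated_forward_backward(A, b, lam, n_iter=500):
    """Nesterov-style accelerated proximal gradient for
    min_x 0.5*||A x - b||^2 + lam*||x||_1 (illustrative sketch)."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    y = x.copy()
    t = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)             # forward (gradient) step on the smooth part
        x_new = soft_threshold(y - grad / L, lam / L)   # backward (proximal) step
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2        # momentum parameter update
        y = x_new + (t - 1) / t_new * (x_new - x)       # extrapolation step
        x, t = x_new, t_new
    return x
```

A strongly convex variant would replace the momentum sequence `t` by a constant extrapolation factor depending on the condition number; the error-tolerant versions studied in the note additionally allow the proximal step to be computed inexactly.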

    Principled Analyses and Design of First-Order Methods with Inexact Proximal Operators

    Proximal operations are among the most common primitives appearing in both practical and theoretical (or high-level) optimization methods. This basic operation typically consists in solving an intermediary (hopefully simpler) optimization problem. In this work, we survey notions of inaccuracy that can be used when solving those intermediary optimization problems. Then, we show that worst-case guarantees for algorithms relying on such inexact proximal operations can be systematically obtained through a generic procedure based on semidefinite programming. This methodology is primarily based on the approach introduced by Drori and Teboulle (Mathematical Programming, 2014) and on convex interpolation results, and allows producing non-improvable worst-case analyses. In other words, for a given algorithm, the methodology generates both worst-case certificates (i.e., proofs) and problem instances on which those bounds are achieved. Relying on this methodology, we provide three new methods with conceptually simple proofs: (i) an optimized relatively inexact proximal point method, (ii) an extension of the hybrid proximal extragradient method of Monteiro and Svaiter (SIAM Journal on Optimization, 2013), and (iii) an inexact accelerated forward-backward splitting supporting backtracking line-search, with both (ii) and (iii) supporting possibly strongly convex objectives. Finally, we use the methodology to study a recent inexact variant of the Douglas-Rachford splitting due to Eckstein and Yao (Mathematical Programming, 2018). We showcase and compare the different variants of the accelerated inexact forward-backward method on a factorization and a total variation problem.
    Comment: Minor modifications including acknowledgments and references. Code available at https://github.com/mathbarre/InexactProximalOperator
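To make the idea of a "relatively inexact" proximal operation concrete, here is a minimal sketch, under assumed names and parameters, of a proximal point method whose subproblems are solved by gradient steps and accepted once a relative-error criterion on the subproblem residual holds; this is one common inexactness model, not the paper's optimized method.

```python
import numpy as np

def inexact_proximal_point(grad_f, x0, lam=1.0, sigma=0.5, n_outer=50, n_inner=200):
    """Proximal point method where each subproblem
        x_{k+1} ~ argmin_z f(z) + ||z - x_k||^2 / (2*lam)
    is solved inexactly by gradient steps and accepted once the subproblem
    residual is small *relative to the step taken* (illustrative sketch)."""
    x = x0.copy()
    for _ in range(n_outer):
        z = x.copy()
        for _ in range(n_inner):
            # gradient of the regularized subproblem at z
            residual = grad_f(z) + (z - x) / lam
            # relative-error acceptance rule for the inexact prox
            if np.linalg.norm(residual) <= sigma / lam * np.linalg.norm(z - x):
                break
            z = z - 0.1 * residual   # fixed inner step size (an assumption)
        x = z
    return x
```

The parameter `sigma` in `[0, 1)` controls how crude the inner solves may be; the exact proximal point method is recovered as `sigma -> 0`.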

    Averaging Atmospheric Gas Concentration Data using Wasserstein Barycenters

    Hyperspectral satellite images report greenhouse gas concentrations worldwide on a daily basis. While taking simple averages of these images over time produces a rough estimate of relative emission rates, atmospheric transport means that simple averages fail to pinpoint the source of these emissions. We propose using Wasserstein barycenters coupled with weather data to average gas concentration data sets and better concentrate the mass around significant sources.
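The paper works with two-dimensional concentration maps and weather data; as a much simpler hedged illustration of the underlying object, the sketch below computes the 2-Wasserstein barycenter of one-dimensional empirical distributions, where it reduces exactly to averaging quantile functions. The function name and setup are assumptions for illustration, not the paper's pipeline.

```python
import numpy as np

def wasserstein_barycenter_1d(samples_list, weights=None, n_quantiles=100):
    """W2 barycenter of 1-D empirical distributions: average their
    quantile functions (this reduction is exact only in one dimension)."""
    if weights is None:
        weights = np.full(len(samples_list), 1.0 / len(samples_list))
    qs = np.linspace(0.0, 1.0, n_quantiles)
    # one row of quantiles per input distribution
    quantiles = np.array([np.quantile(s, qs) for s in samples_list])
    return weights @ quantiles   # the barycenter's quantile function
```

Unlike the pixel-wise (Euclidean) average, which smears mass between two sources, the Wasserstein barycenter interpolates the distributions geometrically, which is the property the paper exploits to keep mass concentrated near emission sources.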

    Evaluation of intersectoral collaborations : "Relevailles" and intersectoral collaborations. Brief Report.

    HIGHLIGHTS
    • In 3 of the 4 cases, the key actor in the collaborative network was the OCF coordinator/liaison officer.
    • All the networks were vulnerable to the departure of a key actor.
    • Collaborative networks did not include any perinatal assistants.
    • 35 of the 37 HSSE actors involved in collaborative networks belonged to a local community services centre (CLSC), even when there was a birthing hospital or birthing centre in the HSSE.
    • Five determinants contributed to or constrained intersectoral collaborations between OCFs and HSSEs.
    • Sufficiency of resources, knowledge of the partner organization, and complementarity/flexibility in the delivery of services were determinants of collaborations.
    • Six modes of OCF/HSSE collaboration were identified.
    • There are few formal mechanisms for collaboration between organizations.
    • Disagreements/misunderstandings on the mechanisms for sharing information about families occurred between organizations and even within organizations.
    • Some parents perceived links between OCFs and CLSCs as falling into two modes of collaboration, namely 1) activating the request/recourse to partner organization services and 2) coordinating the services provided to families.

    Worst-case analyses of efficient first-order methods

    Many modern applications rely on solving optimization problems (e.g., in computational biology, mechanics, and finance), establishing optimization methods as crucial tools in many scientific fields. Providing guarantees on the (hopefully good) behavior of these methods is therefore of significant interest. A standard way of analyzing optimization algorithms consists in worst-case reasoning: providing guarantees on the behavior of an algorithm (e.g., its convergence speed) that are independent of the particular function on which the algorithm is applied and hold for every function in a given class. This thesis aims at providing worst-case analyses of a few efficient first-order optimization methods. We begin with the study of Anderson acceleration methods, for which we provide new explicit worst-case bounds guaranteeing precisely when acceleration occurs. We obtain these guarantees by providing upper bounds on a variation of the classical Chebyshev optimization problem on polynomials, which we believe to be of independent interest. Then, we extend the Performance Estimation Problem (PEP) framework, which was originally designed for principled analyses of fixed-step algorithms, to study first-order methods with adaptive parameters. This is illustrated in particular through the worst-case analyses of the canonical gradient method with Polyak step sizes, which uses gradient-norm and function-value information, and of an accelerated version of it. The approach is also presented on other standard adaptive algorithms. Finally, the last contribution of this thesis is to further develop the PEP methodology for analyzing first-order methods relying on inexact proximal computations.
Using this framework, we produce algorithms with optimized worst-case guarantees and provide (numerical and analytical) worst-case bounds for some standard algorithms in the literature.
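The Polyak step-size rule mentioned in the abstract is simple enough to state in a few lines; the sketch below is a standard textbook rendition under assumed function names, not the thesis's analyzed variant, and it requires knowing the optimal value f*.

```python
import numpy as np

def polyak_gradient_descent(f, grad_f, f_star, x0, n_iter=200):
    """Gradient descent with the Polyak step size
        gamma_k = (f(x_k) - f*) / ||grad f(x_k)||^2,
    which needs the optimal value f* but no other problem constants
    (no Lipschitz or strong-convexity parameters)."""
    x = x0.copy()
    for _ in range(n_iter):
        g = grad_f(x)
        gap = f(x) - f_star
        if gap <= 0 or not np.any(g):
            break                       # already optimal
        x = x - gap / (g @ g) * g       # Polyak step
    return x
```

The adaptivity that makes this rule attractive (the step depends on the current iterate through `f(x_k)` and `grad f(x_k)`) is exactly what places it outside the fixed-step setting of the original PEP framework, motivating the extension described above.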

    Mathematical framework for the modeling and simulation of perfused biological tissues

    Many biological tissues can be modeled as porous media, namely continuous media composed of a solid skeleton filled by a fluid. In biological tissues, the fluid at stake can be blood, airflow in the lungs, or cerebrospinal fluid, all of which can be seen as incompressible fluids. Moreover, in such applications, the porous medium itself can be considered as nearly incompressible. The goal of this PhD thesis is to analyze a recent partial differential equation model describing the motion of a nearly incompressible or incompressible porous medium. This model arises from the linearization of a nonlinear poromechanics model adapted to soft tissue perfusion, but it is also strongly connected to Biot's equations of poroelasticity. In this model, the solid and fluid equations exhibit hyperbolic and parabolic behavior, respectively, and are in addition coupled through the interstitial pressure associated with the incompressibility (divergence) constraint. The first contribution of this thesis is to show the existence and uniqueness of strong and weak solutions in the nearly incompressible and incompressible cases. This is achieved by combining semigroup theory, energy estimates, and T-coercivity. T-coercivity theory, originally developed for unconstrained problems, is extended here to treat general saddle-point and perturbed saddle-point problems. This concept also proves useful for the design of finite elements that remain stable in the incompressible limit and for the numerical analysis of the system. Spatial and temporal convergence analyses are performed for a monolithic scheme, leading to error estimates that are robust with respect to incompressibility, porosity, and permeability. In order to improve computational efficiency, a fractional-step method is proposed and analyzed. In particular, general boundary conditions connecting the fluid and the solid on the boundary are considered and imposed thanks to a Robin-Robin coupling method.
Finally, the relevance of the model to biomedical applications is illustrated by comparing microvessels-on-chip simulations with experimental data.
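The T-coercivity notion invoked in the abstract can be stated compactly; the following is a hedged, generic statement of the standard definition for an abstract bilinear form, not the thesis's specific perturbed saddle-point setting.

```latex
% T-coercivity (standard abstract definition, illustrative only):
% a bounded bilinear form a(.,.) on a Hilbert space V is T-coercive if
\[
  \exists\, T \in \mathcal{L}(V)\ \text{bijective},\ \exists\, \alpha > 0 :
  \quad a(v, Tv) \,\ge\, \alpha \,\|v\|_V^2 \qquad \forall v \in V,
\]
% in which case the variational problem a(u, v) = \ell(v) for all v in V
% is well posed, by the Banach-Necas-Babuska theorem.
```

For constrained (saddle-point) problems, the operator T typically mixes the primal and multiplier components, which is how the thesis extends the unconstrained theory to the coupled solid/fluid/pressure system.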